Fluency, Fluidity, and Feedback in Saarland
One of the tasks I managed to get done during my stay in Michigan a few weeks ago was re-coding the Fluidity application. This is an application I've been working on intermittently over the past two or three years, and I first wrote about it on this site here. The basic aim is an application for second language speakers that lets them practice speaking in response to different tasks and then receive real-time feedback on their speech fluency. So if they are speaking too slowly or with many long pauses, the feedback reflects that; and if they are speaking at a reasonable speed with a stable pace, the feedback lets them know that, too.
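The post doesn't spell out which measures Fluidity actually computes, but to make the idea concrete, here is a minimal sketch in Java (the language of the JavaFX rewrite described below) of two textbook fluency measures, speech rate and long-pause ratio, computed from detected speech/pause segments. The `Segment` type, the syllable counts, and the pause threshold are all my own assumptions for illustration, not the app's actual internals.

```java
import java.util.List;

// Hypothetical representation of one detected segment of the recording.
record Segment(double startSec, double endSec, boolean isSpeech, int syllables) {
    double duration() { return endSec - startSec; }
}

final class FluencyMeasures {
    /** Syllables per second over total elapsed time (speech plus pauses). */
    static double speechRate(List<Segment> segments) {
        double totalTime = 0, totalSyllables = 0;
        for (Segment s : segments) {
            totalTime += s.duration();
            if (s.isSpeech()) totalSyllables += s.syllables();
        }
        return totalTime > 0 ? totalSyllables / totalTime : 0;
    }

    /** Proportion of elapsed time spent in pauses longer than a threshold. */
    static double longPauseRatio(List<Segment> segments, double minPauseSec) {
        double totalTime = 0, pauseTime = 0;
        for (Segment s : segments) {
            totalTime += s.duration();
            if (!s.isSpeech() && s.duration() >= minPauseSec) pauseTime += s.duration();
        }
        return totalTime > 0 ? pauseTime / totalTime : 0;
    }
}
```

A low speech rate or a high long-pause ratio would then trigger the "too slow / too many pauses" style of feedback described above.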
The newly coded version of the application now uses the JavaFX framework for a somewhat more advanced user interface. It also incorporates an avatar—which I call "Fludie"—that is designed to change its facial expression in response to the ongoing measures of the speaker's fluency. In this sense, Fludie gives feedback that is meant to emulate that of a human listener: an interested expression if the speaker is fluent, a confused expression if the speaker is not.
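Again purely as illustration: one simple way to drive an avatar like Fludie in JavaFX is to map a rolling fluency score to an expression image and swap it on the UI thread. The image file names, the thresholds, and the 0–1 score are invented for this sketch; the post doesn't describe how Fludie is actually implemented.

```java
import javafx.application.Platform;
import javafx.scene.image.Image;
import javafx.scene.image.ImageView;

final class FludieAvatar {
    private final ImageView view;
    // Hypothetical expression images, loaded from the classpath.
    private final Image interested = new Image("fludie_interested.png");
    private final Image neutral    = new Image("fludie_neutral.png");
    private final Image confused   = new Image("fludie_confused.png");

    FludieAvatar(ImageView view) { this.view = view; }

    /** Called periodically with the latest fluency score in [0, 1]. */
    void onFluencyUpdate(double score) {
        Image next = score > 0.7 ? interested
                   : score > 0.4 ? neutral
                   : confused;
        // Audio analysis runs off the UI thread, so marshal the update
        // back onto the JavaFX application thread.
        Platform.runLater(() -> view.setImage(next));
    }
}
```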
I am still in the process of testing it, but before my sabbatical ends and I have to return to reality, I thought I'd like to consult with as many acquaintances as possible to get feedback on how to improve the application. So, I contacted Jürgen Trouvain at Saarland University in Germany to arrange a visit and get his input. I've known Jürgen for several years; our paths have crossed at numerous conferences because our interests are quite similar. He is an expert in acoustic phonetics and computational linguistics, with a strong interest in speech fluency and especially hesitation phenomena.
He not only welcomed my visit, but also arranged for me to give a talk on my work at their phonetics colloquium, so I presented to a larger audience than just him. It was a very productive day, indeed! We naturally talked a lot about speech fluency, and he told me about a project he's currently working on that examines just how "silent" silence really is. That is, it's surprising how much additional acoustic information can be discovered even in the midst of what one might normally label as silence: tongue clicks, throat-clearing, breathing sounds of various kinds, and other sounds related to articulatory gestures. All of this information can be useful for refining the perception of such things as sentiment, the speaker's state of mind, and of course speech fluency itself.
I got to talk about Fluidity (and my related research) for quite a long time, and they gave me some really helpful feedback on it. One of my aims for the application is that it not be limited to English learners: I would like it to be flexible enough to be used by learners of various languages, while also giving accurate feedback to speakers from different first-language backgrounds. The members of the phonetics group had some useful comments for me on that point.
Soon, I'll be back in Japan, where I hope to carry out more development work on the application as well as testing with English learners in Japan (as a start). But my biggest worry is going to be finding time: the sabbatical year has been a great chance to focus deeply on projects like this, and getting back to teaching and administrative work will mean figuring out how to juggle development alongside all of that. I hope I can keep Fludie happy...
[Note: This post was written in September 2020. However, in order to preserve the chronology of the blog, it has been dated to reflect when the described events actually took place.]